Doing the Impossible: Why Neural Networks Can Be Trained at All

Authors
Abstract


Similar articles

Recurrent neural networks can be trained to be maximum a posteriori probability classifiers

This paper proves that supervised learning algorithms used to train recurrent neural networks have an equilibrium point when the network implements a Maximum A Posteriori Probability (MAP) classifier. The result holds as a limit when the size of the training set goes to infinity. The result is general, since it stems from a property of cost-minimizing algorithms, but to prove it we implicitly assum...
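
As a rough sketch of why cost minimization leads to MAP behavior (reconstructed from the abstract above, not taken from the paper's proof; the notation is mine): for a cross-entropy-type cost, the expected cost is minimized pointwise when the network output equals the class posterior, so the trained decision rule converges to the MAP rule as the training set grows. In LaTeX:

    % Expected cross-entropy cost over the data distribution:
    J(f) = \mathbb{E}_{x}\Big[-\sum_{c} P(c \mid x)\,\log f_c(x)\Big],
    \qquad \sum_{c} f_c(x) = 1.
    % Pointwise minimization over the simplex (e.g., by Lagrange
    % multipliers) gives the class posterior:
    f_c^{*}(x) = P(c \mid x).
    % The trained network's decision rule is therefore the MAP classifier:
    \hat{y}(x) = \arg\max_{c} f_c^{*}(x) = \arg\max_{c} P(c \mid x).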


Why aren't we all doing ultrasound?

Musculoskeletal ultrasound (MUS) is potentially the most exciting development in clinical rheumatology practice in recent years. It is readily accessible, patient-friendly and relatively inexpensive. It can be used to detect small joint effusions and thus increase the accuracy of diagnostic aspiration and therapeutic injection [1]. It is more sensitive than clinical examination in detecting ent...


Instantaneously Trained Neural Networks

This paper presents a review of instantaneously trained neural networks (ITNNs). These networks trade learning time for size and, in the basic model, a new hidden node is created for each training sample. Various versions of the corner-classification family of ITNNs, which have found applications in artificial intelligence (AI), are described. Implementation issues are also considered.
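
The "one hidden node per training sample" idea can be made concrete with a short sketch in the style of the CC4 corner-classification algorithm. This is a hedged reconstruction from the description above: the weight rule (+1 for 1-bits, -1 for 0-bits, bias r - s + 1 for a sample with s ones and radius of generalization r) is recalled from the CC4 literature, while the class name and API here are invented for illustration.

    import numpy as np

    class CC4Sketch:
        """Instantaneously trained network: one hidden node per sample."""

        def __init__(self, radius=0):
            self.radius = radius  # radius of generalization r

        def train(self, X, y):
            # Instantaneous "training": each binary sample becomes a hidden
            # node with weight +1 where the bit is 1 and -1 where it is 0.
            X, y = np.asarray(X), np.asarray(y)
            self.W = np.where(X == 1, 1, -1)
            s = X.sum(axis=1)                    # number of 1-bits per sample
            self.bias = self.radius - s + 1      # fires iff Hamming dist <= r
            self.out_w = np.where(y == 1, 1, -1)

        def predict(self, X):
            h = (np.asarray(X) @ self.W.T + self.bias) > 0  # step activation
            return (h.astype(int) @ self.out_w > 0).astype(int)

    # With radius 0 the network memorizes the training set exactly:
    net = CC4Sketch(radius=0)
    net.train([[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]], [1, 0, 1])
    print(net.predict([[1, 0, 1, 0], [0, 1, 0, 1]]))  # -> [1 0]

Each hidden node's weighted sum is r - d + 1 for an input at Hamming distance d from its stored sample, so the node fires exactly when d <= r; this is the sense in which the network trades size (one node per sample) for zero iterative learning time.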


Why Information can be Free

This paper describes a model that demonstrates that sharing knowledge can be adaptive purely for its own sake. This is despite the fact that sharing knowledge costs the speaker in terms of foraging opportunities, and that initially the majority of the population consists of free-riders who listen but do not speak. The population is able to take advantage of the increased carrying capacity of th...


The empirical size of trained neural networks

ReLU neural networks define piecewise linear functions of their inputs. However, initializing and training a neural network is very different from fitting a linear spline. In this paper, we expand empirically upon previous theoretical work to demonstrate features of trained neural networks. Standard network initialization and training produce networks vastly simpler than a naive parameter count...
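
A toy way to see the gap between raw parameter count and realized complexity (an illustration in the spirit of the abstract above; the architecture, sampling scheme, and names are arbitrary choices, not the paper's setup) is to count the distinct ReLU activation patterns a small random network produces over densely sampled inputs, since each pattern corresponds to one linear piece of the function:

    import numpy as np

    rng = np.random.default_rng(0)
    d_in, width = 2, 16

    # One hidden ReLU layer; x -> W2 @ relu(W1 @ x + b1) is piecewise
    # linear, with one linear piece per hidden activation pattern.
    W1 = rng.standard_normal((width, d_in))
    b1 = rng.standard_normal(width)

    X = rng.uniform(-1.0, 1.0, size=(100_000, d_in))  # dense input samples
    patterns = (X @ W1.T + b1) > 0                    # on/off state per unit

    n_seen = len({p.tobytes() for p in patterns})
    print(f"patterns seen: {n_seen}, naive bound 2**width = {2**width}")

For 2-D inputs, 16 hyperplanes can cut the plane into at most 1 + 16 + 120 = 137 regions, so the realized count is necessarily tiny compared with the naive 2^16 = 65536 patterns a 16-unit layer could in principle encode.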


Journal

Journal title: Frontiers in Psychology

Year: 2018

ISSN: 1664-1078

DOI: 10.3389/fpsyg.2018.01185